#Meta Content Moderation
childrenofthedigitalage · 5 months ago
Meta's Content Moderation Changes: Why Ireland Must Act Now
The recent decision by Meta to end third-party fact-checking programs on platforms like Facebook, Instagram, and Threads has sent shockwaves through online safety circles. For a country like Ireland, home to Meta’s European headquarters, this is more than just a tech policy shift—it’s a wake-up call. It highlights the urgent need for…
1 note
saywhat-politics · 5 months ago
Meta rolled out a number of changes to its “Hateful Conduct” policy Tuesday as part of a sweeping overhaul of its approach toward content moderation.
Meta announced a series of major updates to its content moderation policies today, including ending its fact-checking partnerships and “getting rid” of restrictions on speech about “topics like immigration, gender identity and gender” that the company describes as frequent subjects of political discourse and debate. “It’s not right that things can be said on TV or the floor of Congress, but not on our platforms,” Meta’s newly appointed chief global affairs officer, Joel Kaplan, wrote in a blog post outlining the changes.
In an accompanying video, Meta CEO Mark Zuckerberg described the company’s current rules in these areas as “just out of touch with mainstream discourse.”
100 notes
mostlysignssomeportents · 1 year ago
CDA 230 bans Facebook from blocking interoperable tools
I'm touring my new, nationally bestselling novel The Bezzle! Catch me TONIGHT (May 2) in WINNIPEG, then TOMORROW (May 3) in CALGARY, then SATURDAY (May 4) in VANCOUVER, then onto Tartu, Estonia, and beyond!
Section 230 of the Communications Decency Act is the most widely misunderstood technology law in the world, which is wild, given that it's only 26 words long!
https://www.techdirt.com/2020/06/23/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act/
CDA 230 isn't a gift to big tech. It's literally the only reason that tech companies don't censor anything we write that might offend some litigious creep. Without CDA 230, there'd be no #MeToo. Hell, without CDA 230, just hosting a private message board where two friends get into serious beef could expose you to an avalanche of legal liability.
CDA 230 is the only part of a much broader, wildly unconstitutional law that survived the Supreme Court challenge that struck the rest down in 1997. We don't spend a lot of time talking about all those other parts of the CDA, but there's actually some really cool stuff in 230 itself that no one's really paid attention to:
https://www.aclu.org/legal-document/supreme-court-decision-striking-down-cda
One of those little-regarded sections of CDA 230 is part (c)(2)(B), which broadly immunizes anyone who makes a tool that helps internet users block content they don't want to see.
Enter the Knight First Amendment Institute at Columbia University and their client, Ethan Zuckerman, an internet pioneer turned academic at UMass Amherst. Knight has filed a lawsuit on Zuckerman's behalf, seeking assurance that Zuckerman (and others) can use browser automation tools to block, unfollow, and otherwise modify the feeds Facebook delivers to its users:
https://knightcolumbia.org/documents/gu63ujqj8o
If Zuckerman is successful, he will set a precedent that allows toolsmiths to provide internet users with a wide variety of automation tools that customize the information they see online. That's something that Facebook bitterly opposes.
Facebook has a long history of attacking startups and individual developers who release tools that let users customize their feed. They shut down Friendly Browser, a third-party Facebook client that blocked trackers and customized your feed:
https://www.eff.org/deeplinks/2020/11/once-again-facebook-using-privacy-sword-kill-independent-innovation
Then in 2021, Facebook's lawyers terrorized a software developer named Louis Barclay in retaliation for a tool called "Unfollow Everything," which autopiloted your browser to click through all the laborious steps needed to unfollow every account you were subscribed to – and permanently banned Barclay from the platform:
https://slate.com/technology/2021/10/facebook-unfollow-everything-cease-desist.html
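To make the mechanics concrete, here's a minimal sketch – emphatically not Barclay's or Zuckerman's actual code, and the selector is a hypothetical placeholder for Facebook's real, ever-changing markup – of what this kind of browser automation amounts to: find every unfollow control the page exposes and click through them the way a very patient user would.

```typescript
// Minimal sketch of an "unfollow everything"-style browser automation.
// UNFOLLOW_SELECTOR is hypothetical; a real tool has to reverse-engineer
// Facebook's actual markup, which changes constantly.
const UNFOLLOW_SELECTOR = '[aria-label="Unfollow"]';

async function unfollowEverything(delayMs = 1500): Promise<void> {
  const buttons = Array.from(
    document.querySelectorAll<HTMLElement>(UNFOLLOW_SELECTOR)
  );
  for (const button of buttons) {
    button.click(); // the same click a user would make by hand
    // pause between clicks so the automation behaves like a patient human
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  console.log(`clicked ${buttons.length} unfollow controls`);
}

unfollowEverything();
```

Nothing in that sketch does anything a user couldn't do by hand – it just automates the clicks Facebook's own interface offers, which is exactly the kind of tool the suit argues (c)(2)(B) protects.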
Now, Zuckerman is developing "Unfollow Everything 2.0," an even richer version of Barclay's tool.
This rich record of legal bullying gives Zuckerman and his lawyers at Knight something important: "standing" – the right to bring a case. They argue that a browser automation tool that helps you control your feeds is covered by CDA 230(c)(2)(B), and that Facebook can't legally threaten the developer of such a tool with liability for violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, or the other legal weapons it wields against this kind of "adversarial interoperability."
Wired's coverage of the suit quotes a variety of experts – including my EFF colleague Sophia Cope – who broadly endorse the very clever legal tactic Zuckerman and Knight are bringing to the court.
I'm very excited about this myself. "Adversarial interop" – modding a product or service without permission from its maker – is hugely important to disenshittifying the internet and forestalling future attempts to reenshittify it. From third-party ink cartridges to compatible replacement parts for mobile devices to alternative clients and firmware to ad- and tracker-blockers, adversarial interop is how internet users defend themselves against unilateral changes to services and products they rely on:
https://www.eff.org/deeplinks/2019/10/adversarial-interoperability
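To pick one of those examples apart: a tracker-blocker is a browser extension that cancels outgoing requests to hosts the user never asked to talk to. Here's a minimal sketch using the standard WebExtensions webRequest API (Firefox MV2 style); the blocklist domains are hypothetical placeholders, whereas real blockers ship curated lists like EasyPrivacy:

```typescript
// Minimal sketch of a tracker-blocking WebExtension background script.
// The blocklist is a hypothetical placeholder, not a real tracker list.
declare const browser: any; // supplied by the browser / webextension-polyfill

const BLOCKLIST = ["tracker.example.com", "ads.example.net"];

browser.webRequest.onBeforeRequest.addListener(
  (details: { url: string }) => {
    const host = new URL(details.url).hostname;
    const blocked = BLOCKLIST.some(
      (domain) => host === domain || host.endsWith("." + domain)
    );
    // Returning { cancel: true } drops the request before it leaves the browser.
    return { cancel: blocked };
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);
```

The service being blocked never gets a vote – that lack of permission is the whole point of adversarial interop.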
Now, all that said, a court victory here won't necessarily mean that Facebook can't block interoperability tools. Facebook still has the unilateral right to terminate its users' accounts. They could kick off Zuckerman. They could kick off his lawyers from the Knight Institute. They could permanently ban any user who uses Unfollow Everything 2.0.
Obviously, that kind of nuclear option could prove very unpopular for a company that is the very definition of "too big to care." But Unfollow Everything 2.0 and the lawsuit don't exist in a vacuum. The fight against Big Tech has a lot of tactical diversity: EU regulations, antitrust investigations, state laws, tinkerers and toolsmiths like Zuckerman, and impact litigation lawyers coming up with cool legal theories.
Together, they represent a multi-front war on the very idea that four billion people should have their digital lives controlled by an unaccountable billionaire man-child whose major technological achievement was making a website where he and his creepy friends could nonconsensually rate the fuckability of their fellow Harvard undergrads.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/05/02/kaiju-v-kaiju/#cda-230-c-2-b
Image: D-Kuru (modified): https://commons.wikimedia.org/wiki/File:MSI_Bravo_17_(0017FK-007)-USB-C_port_large_PNr%C2%B00761.jpg
Minette Lontsie (modified): https://commons.wikimedia.org/wiki/File:Facebook_Headquarters.jpg
CC BY-SA 4.0: https://creativecommons.org/licenses/by-sa/4.0/deed.en
246 notes
tomorrowusa · 6 months ago
Being a content moderator on Facebook can give you severe PTSD.
Let's take time from our holiday festivities to commiserate with those who have to moderate social media. They witness some of the absolute worst of humanity.
More than 140 Facebook content moderators have been diagnosed with severe post-traumatic stress disorder caused by exposure to graphic social media content including murders, suicides, child sexual abuse and terrorism.

The moderators worked eight- to 10-hour days at a facility in Kenya for a company contracted by the social media firm and were found to have PTSD, generalised anxiety disorder (GAD) and major depressive disorder (MDD) by Dr Ian Kanyanya, the head of mental health services at Kenyatta National hospital in Nairobi.

The mass diagnoses have been made as part of a lawsuit being brought against Facebook’s parent company, Meta, and Samasource Kenya, an outsourcing company that carried out content moderation for Meta using workers from across Africa.

The images and videos including necrophilia, bestiality and self-harm caused some moderators to faint, vomit, scream and run away from their desks, the filings allege.
You can imagine what now gets circulated on Elon Musk's Twitter/X which has ditched most of its moderation.
According to the filings in the Nairobi case, Kanyanya concluded that the primary cause of the mental health conditions among the 144 people was their work as Facebook content moderators, as they “encountered extremely graphic content on a daily basis, which included videos of gruesome murders, self-harm, suicides, attempted suicides, sexual violence, explicit sexual content, child physical and sexual abuse, horrific violent actions just to name a few”.

Four of the moderators suffered trypophobia, an aversion to or fear of repetitive patterns of small holes or bumps that can cause intense anxiety. For some, the condition developed from seeing holes on decomposing bodies while working on Facebook content.
Being a social media moderator may sound easy, but you will never be able to unsee the horrors which the dregs of society wish to share with others.
To make matters worse, the moderators in Kenya were paid just one-eighth what moderators in the US are paid.
Social media platform owners have amassed wealth rivaling the GDPs of some countries. They are among the greediest leeches in the history of money.
37 notes
bravoechoes · 1 month ago
I initially thought the Entity plot in Dead Reckoning was extremely goofy – like, what do you mean a computer “controls the very truth itself” (<- extremely annoying grad student voice), and what a hysterical thing for the CIA of all agencies to be concerned about – but honestly I’m on board with it now. After rewatching Dead Reckoning, it is so clearly a movie responding to concerns about digitally mediated medical and political disinformation during covid, and the background tension animating the film – that every government wants to get control of the Entity in order to centralise and use this disinformation mechanism – does feel very much like a response to the political responses to the pandemic (particularly from the US/UK/Canada/etc). Obviously I’m not expecting any deeper engagement with the political production of truth and knowledge or anything like that, but this anxiety feels more cohesive and legitimate than I initially figured.
20 notes
political-us · 3 months ago
Last Week Tonight with John Oliver – Facebook and Content Moderation [YouTube video]
25 notes
justinspoliticalcorner · 1 month ago
Christopher Wiggins at The Advocate:
Social media platforms are failing LGBTQ+ users—and in some cases, actively endangering them. That’s the conclusion of GLAAD’s 2025 Social Media Safety Index, released Tuesday, which shows that protections for LGBTQ people online have eroded dramatically over the past year, especially on platforms owned by Meta and Google.

The report, now in its fifth year, delivers a sobering snapshot of an online environment increasingly hostile to LGBTQ expression, particularly for transgender and nonbinary people. GLAAD’s scorecard, which evaluates six major platforms—TikTok, Facebook, Instagram, Threads, YouTube, and X—found that every company received a failing score. TikTok was rated highest at just 56 out of 100. X, formerly Twitter, scored the lowest at 30.

“At a time when real-world violence and harassment against LGBTQ+ people is on the rise, social media companies are profiting from the flames of anti-LGBTQ+ hate instead of ensuring the basic safety of LGBTQ+ users,” GLAAD President and CEO Sarah Kate Ellis said in a statement. “These low scores should terrify anyone who cares about creating safer, more inclusive online spaces.”

The most alarming developments detailed in the report include Meta’s overhaul of its “Hateful Conduct” policy to permit users to characterize LGBTQ+ people as mentally ill—language GLAAD says echoes political and religious attacks aimed squarely at the trans community. YouTube, meanwhile, removed “gender identity and expression” from its list of protected characteristics under its hate speech policy, a quiet but significant policy change that leaves trans users vulnerable to targeted abuse, the report notes.
GLAAD’s 2025 Social Media Safety Index report reveals that the safety of LGBTQ+ people on social media platforms, especially those owned by Meta and Google, has plummeted.
See Also:
GLAAD: 2025 Social Media Safety Index
10 notes
reading-writing-revolution · 5 months ago
A thread on Zuckerberg's bullshit with Joe Rogan, America's douchebro.
10 notes
gwydionmisha · 4 months ago
Content Moderation: Last Week Tonight with John Oliver (HBO) [YouTube video]
7 notes
kilov3books · 4 months ago
Is Social Media Content Moderation Silencing Free Speech, or Protecting Civil Rights?
By Ki Lov3 · Editor: Toni Gelardi · © Feb 6, 2025
While some argue that content moderation restricts free speech, it actually serves as a safeguard, reinforcing civil rights and equality just as hate crime laws protect individuals in physical spaces. Those laws exist because society recognizes that certain speech and actions cause real harm. If we protect people from hate-fueled attacks in real life, why should we allow them online, where the harm can be even more severe? Social media has become a central part of public discourse, shaping conversations, movements, and opinions, but it has also made it possible for hate speech, harassment, and misinformation to spread at scale.
Cyberbullying and hate speech are linked to increased depression, anxiety, and self-harm. A study from the Journal of Medical Internet Research found that victims of online harassment are more than twice as likely to engage in self-harm or suicidal behaviors compared to those who are not targeted. Unlike in-person bullying, online harassment is persistent, anonymous, and amplified, often making it inescapable.
Why Moderation is Not Censorship
Just as businesses can deny service to abusive customers, social media companies have the right—and obligation—to moderate harmful content. This does not silence opinions; rather, it ensures that discussions remain inclusive. Without it, hate speech and harassment push marginalized voices out of public discourse. True free speech necessitates an environment where people can engage without fear of abuse. Free speech laws prevent the government from suppressing speech, not private platforms from enforcing policies to create a safer environment.
Effective moderation should:
Apply rules consistently to all users.
Focus on harm reduction, not suppressing debate.
Be developed with input from civil rights experts.
Include an appeals process to ensure fairness.
In conclusion, by preventing online spaces from becoming venues for harm, content moderation safeguards civil liberties. We shouldn't tolerate hateful attacks online any more than we would tolerate them in person. Creating safe online environments enhances free speech rather than diminishing it, by guaranteeing that all voices can be heard.
3 notes
monetizeme · 5 months ago
“The announcement from Mark is him basically saying: ‘Hey I heard the message, we will not intervene in the United States,’” said Haugen.
Announcing the changes on Tuesday, Zuckerberg said he would “work with President Trump” on pushing back against governments seeking to “censor more”, pointing to Latin America, China and Europe, where the UK and EU have introduced online safety legislation.
Haugen also raised concern over the effect on Facebook’s safety standards in the global south. In 2018 the United Nations said Facebook had played a “determining role” in spreading hate speech against Rohingya Muslims, who were the victims of a genocide in Myanmar.
“What happens if another Myanmar starts spiralling up again?” Haugen said. “Is the Trump state department going to call Facebook? Does Facebook have to fear any consequences from doing a bad job?”
4 notes
shoutgraphics · 5 months ago
Social Media
Let's Talk About It
Shout! Graphics has never shied away from sharing and expressing political opinions. As a designer, I believe my education and experience have equipped me to convey messages in a clear, impactful, and easily digestible way. Regardless of the size of my platform, I won’t let that ability go to waste. Recently, things have become more complicated, and I want to take a moment to speak—not as a brand, but as an individual.
Over the past decade, social media platforms have significantly restricted organic reach—the ability for others to see posts without requiring content creators to pay for promotion. Despite these changes, these platforms remain essential as search engines. While I’d love for all my clients to find me directly through my contact page, this isn’t a realistic approach. As a small freelancer, I must maintain an active presence, to some extent, on all major platforms.
The ethics of this obligation grow more complicated every day, and I suspect this won’t be the last statement of its kind from a brand like mine. We are all striving to find a balance. Being present on these platforms means supporting corporations that often prioritize toxic environments and misinformation over a commitment to truth. Platforms like Twitter/X have become significant roadblocks in this regard. While the tension feels particularly intense now, it’s important to recognize that this is not a new phenomenon.
Since at least 2015, companies like Meta have provided a haven for bad-faith actors—often at great cost. Despite this, I hoped that platforms like Threads and Instagram could serve as better alternatives for maintaining a necessary social media presence. However, Meta's recent announcements regarding content moderation have made it clear this is no longer a viable option.
_
I will be changing how I use my social media accounts on a case-by-case basis. 🔗 Learn more about that here: shoutgraphics.design/social-media-policy/
2 notes
tomorrowusa · 4 months ago
[YouTube video]
John Oliver did a major piece on Mark Zuckerberg's Meta. The main, though not sole, focus is content moderation. Zuck's total capitulation to MAGA means that real content moderation is now a thing of the past there.
In his typical way, John does a funny takedown of Mark Zuckerberg. It's worth watching the vid just for those bits.
Near the end, he tells people who are still on Facebook, Instagram, and other Meta platforms how to make themselves less valuable to MAGA Meta Mark. It's your way to "defund Meta".
For your convenience, here's the link...
How to change your settings to make yourself less valuable to Meta
Of course you should just leave Meta entirely. But if you're still Zuck-curious then that's a fair first step. Share that link with people still using Zuck platforms.
One of the things John Oliver also recommends at that link is using the Firefox browser. Firefox, by Mozilla, is the best general-use browser for privacy, and I use it myself. Chrome is just a data vacuum cleaner for Google.
27 notes
kingofmyborrowedheart · 6 months ago
Seeing all of these big tech companies bending the knee to Trump is really something.
4 notes
justinspoliticalcorner · 5 months ago
Christopher Wiggins at The Advocate:
Meta, the parent company of Instagram, Facebook, and Threads, under the leadership of CEO Mark Zuckerberg, has overhauled its content moderation policies, sparking outrage among LGBTQ+ advocacy groups, employees, and users. The company now permits slurs and dehumanizing rhetoric targeting LGBTQ+ people, a shift critics say is a deliberate alignment with far-right agendas and a signal of its disregard for marginalized communities’ safety.

Leaked training materials reviewed by Platformer and The Intercept reveal that moderators are now instructed to allow posts calling LGBTQ+ people “mentally ill” and denying the existence of transgender individuals. Posts like “A trans person isn’t a he or she, it’s an it” and “There’s no such thing as trans children” are deemed non-violating under the new policies. Use of a term considered a slur to refer to transgender people is also now permissible, The Intercept reports.

The changes, which include removing independent fact-checking and loosening hate speech restrictions, closely resemble Elon Musk’s controversial overhaul of Twitter, now X. Zuckerberg framed the updates as a return to Meta’s “roots” in free expression, but advocacy groups argue the move sacrifices safety for engagement.
Meta has thrown away any and all of its remaining goodwill this week by pandering to anti-LGBTQ+ and anti-DEI jagoffs, for example by permitting defamatory slurs against LGBTQ+ people.
See Also:
LGBTQ Nation: Meta employees speak out against new anti-LGBTQ+ & anti-DEI policies
12 notes